Bayesian Multitask Inverse Reinforcement Learning

Authors

  • Christos Dimitrakakis
  • Constantin A. Rothkopf
Abstract

We generalise the problem of inverse reinforcement learning to multiple tasks, from multiple demonstrations. The demonstrations may come from one expert trying to solve several different tasks, or from different experts trying to solve the same task. Our main contribution is to formalise the problem as statistical preference elicitation, via a number of structured priors whose form captures our biases about the relatedness of different tasks or expert policies. In doing so, we introduce a prior on policy optimality, which is more natural to specify. We show that our framework allows us not only to learn efficiently from multiple experts, but also to effectively differentiate between the goals of each. Possible applications include analysing the intrinsic motivations of subjects in behavioural experiments and learning from multiple teachers.
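As a rough sketch of the formulation above (the notation here is ours, not necessarily the paper's): given demonstration data D_m for each of M tasks, with an unknown reward function \rho_m and expert policy \pi_m per task, a structured prior ties the tasks together through a shared hyperparameter \theta, so that the joint posterior factorises as

    p(\rho_{1:M}, \pi_{1:M}, \theta \mid D_{1:M}) \;\propto\; p(\theta) \prod_{m=1}^{M} p(\rho_m \mid \theta)\, p(\pi_m \mid \rho_m)\, p(D_m \mid \pi_m),

where p(\pi_m \mid \rho_m) is the prior on policy optimality: it places more mass on policies that achieve higher value under \rho_m, without assuming the demonstrator is exactly optimal.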

Similar articles

Multi-Task Reinforcement Learning Using Hierarchical Bayesian Models

For this project, the objective was to build a working implementation of a multi-task reinforcement learning (MTRL) agent using the hierarchical Bayesian model (HBM) framework described in the paper “Multitask reinforcement learning: A hierarchical Bayesian approach” (Wilson et al., 2007). This agent was then used to play a modified version of the game of Pacman. In this version of the classic arcade ...

Preference Elicitation and Inverse Reinforcement Learning

We state the problem of inverse reinforcement learning in terms of preference elicitation, resulting in a principled (Bayesian) statistical formulation. This generalises previous work on Bayesian inverse reinforcement learning and allows us to obtain a posterior distribution on the agent’s preferences, policy and, optionally, the obtained reward sequence, from observations. We examine the relati...

Efficient Inverse Reinforcement Learning using Adaptive State-Graphs

Inverse Reinforcement Learning (IRL) provides a powerful mechanism for learning complex behaviors from demonstration by rationalizing such demonstrations. Unfortunately, its applicability has been largely hindered by the lack of powerful representations that can take advantage of various task affordances while still admitting scalability. Inspired by the success of sampling-based approaches in class...

Nonparametric Bayesian Inverse Reinforcement Learning for Multiple Reward Functions

We present a nonparametric Bayesian approach to inverse reinforcement learning (IRL) for multiple reward functions. Most previous IRL algorithms assume that the behaviour data is obtained from an agent who is optimizing a single reward function, but this assumption is hard to guarantee in practice. Our approach is based on integrating the Dirichlet process mixture model into Bayesian IRL. We pr...
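As one illustration of the idea just described, here is a minimal sketch, assuming a toy likelihood and a single-auxiliary-sample Gibbs step (in the spirit of Neal's Algorithm 8), of how a Dirichlet process mixture can cluster demonstrations by reward function. The names and likelihood model are ours, not the paper's:

    import numpy as np

    # Illustrative sketch only: cluster demonstrations by reward function
    # under a Dirichlet process mixture. A demonstration joins an existing
    # reward cluster with probability proportional to cluster size times
    # its likelihood under that cluster's reward, or opens a new cluster
    # with probability proportional to the concentration alpha (one
    # auxiliary reward drawn from the base measure, as in Neal's
    # Algorithm 8 with m = 1). The likelihood is a toy stand-in.

    rng = np.random.default_rng(0)
    alpha = 1.0       # DP concentration parameter
    n_features = 4    # rewards are linear in 4 state features

    def demo_loglik(demo_features, reward_weights):
        # Toy log p(demonstration | reward): alignment between the
        # demonstration's average feature counts and the reward weights.
        return float(demo_features @ reward_weights)

    def gibbs_sweep(demos, assignments, rewards):
        # One Gibbs sweep over cluster assignments.
        for i, demo in enumerate(demos):
            assignments[i] = -1  # remove demo i from its current cluster
            counts = {k: assignments.count(k) for k in set(assignments) - {-1}}
            fresh = max(rewards, default=-1) + 1       # label for a new cluster
            new_reward = rng.normal(size=n_features)   # draw from base measure
            labels = list(counts) + [fresh]
            scores = np.array(
                [np.log(counts[k]) + demo_loglik(demo, rewards[k]) for k in counts]
                + [np.log(alpha) + demo_loglik(demo, new_reward)]
            )
            probs = np.exp(scores - scores.max())
            probs /= probs.sum()
            choice = labels[rng.choice(len(labels), p=probs)]
            if choice == fresh:
                rewards[fresh] = new_reward
            assignments[i] = choice
        return assignments, rewards

    # Usage: five demonstrations summarised by average feature counts.
    demos = rng.normal(size=(5, n_features))
    assignments = [0] * len(demos)
    rewards = {0: rng.normal(size=n_features)}
    assignments, rewards = gibbs_sweep(demos, assignments, rewards)
    print(assignments)  # inferred reward-cluster label per demonstration

Repeating such sweeps yields samples from the posterior over partitions, so the number of distinct reward functions is inferred from the data rather than fixed in advance.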

Journal title:

Volume:   Issue:

Pages:   -

Publication date: 2011